Three-way Imbalanced Learning based on Fuzzy Twin SVM
Three-way decision (3WD) is a powerful tool for granular computing to deal
with uncertain data, commonly used in information systems, decision-making, and
medical care. Three-way decision has been studied extensively in traditional
rough set models, but it has rarely been combined with the currently popular
field of machine learning. In this paper, three-way decision is connected with
the support vector machine (SVM), a standard binary classification model in
machine learning, to address the imbalanced classification problems that
standard SVM handles poorly. A new three-way fuzzy membership function and a new fuzzy
twin support vector machine with three-way membership (TWFTSVM) are proposed.
The new three-way fuzzy membership function is defined to increase the
certainty of uncertain data in both input space and feature space, which
assigns higher fuzzy membership to minority samples compared with majority
samples. To evaluate the effectiveness of the proposed model, comparative
experiments are designed for forty-seven different datasets with varying
imbalance ratios. In addition, datasets with different imbalance ratios are
derived from the same dataset to further assess the proposed model's
performance. The results show that the proposed model significantly outperforms
other traditional SVM-based methods.
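As a hedged illustration of the idea of a fuzzy membership that favours minority samples, the following sketch (my own construction, not the paper's TWFTSVM definition) weights each sample by its distance to its class centre and up-weights the minority class by the imbalance ratio:

```python
import numpy as np

def fuzzy_membership(X, y, minority_label=1, delta=1e-6):
    """Illustrative distance-based fuzzy membership: samples closer to
    their class centre get higher membership, and minority samples are
    up-weighted by the imbalance ratio."""
    X, y = np.asarray(X, dtype=float), np.asarray(y)
    m = np.ones(len(y))
    # imbalance ratio: majority count over minority count
    ratio = (y != minority_label).sum() / max((y == minority_label).sum(), 1)
    for label in np.unique(y):
        idx = y == label
        centre = X[idx].mean(axis=0)
        d = np.linalg.norm(X[idx] - centre, axis=1)
        # membership in (0.5, 1], decaying with distance from the centre
        m[idx] = 1.0 - d / (2.0 * (d.max() + delta))
        if label == minority_label:
            m[idx] *= ratio  # up-weight minority samples (may exceed 1)
    return np.clip(m, delta, None)

# two minority samples (label 1) vs. four majority samples (label 0)
X = [[0.0, 0.0], [0.1, 0.1], [5.0, 5.0], [5.1, 5.0], [4.9, 5.2], [5.2, 4.8]]
y = [1, 1, 0, 0, 0, 0]
w = fuzzy_membership(X, y)
```

Such weights would then scale the per-sample penalty terms in the twin-SVM objectives, so that misclassifying a minority sample costs more.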
Understanding the thermal implications of multicore architectures
Multicore architectures are becoming the main design paradigm for current and future processors. The main reason is that multicore designs provide an effective way of overcoming instruction-level parallelism (ILP) limitations by exploiting thread-level parallelism (TLP). In addition, it is a power- and complexity-effective way of taking advantage of the huge number of transistors that can be integrated on a chip. On the other hand, today's higher-than-ever power densities have made temperature one of the main limitations of microprocessor evolution. Thermal management in multicore architectures is a fairly new area. Some works have addressed dynamic thermal management in bi/quad-core architectures. This work provides insight and explores different alternatives for thermal management in multicore architectures with 16 cores. Schemes employing both energy reduction and activity migration are explored and improvements for thread migration schemes are proposed.
Peer reviewed. Postprint (published version).
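As a toy illustration of threshold-triggered activity migration (a deliberately simplified policy, not one of the schemes evaluated in the paper), the sketch below moves the thread on any core exceeding a thermal threshold to the coolest idle core:

```python
def migrate_threads(temps, assignment, threshold):
    """Toy activity-migration policy: whenever a core exceeds the
    thermal threshold, move its thread to the coolest idle core."""
    new = dict(assignment)  # core -> thread id (or None if idle)
    idle = [c for c, t in new.items() if t is None]
    # visit the hottest cores first
    for core in sorted(new, key=lambda c: temps[c], reverse=True):
        if new[core] is not None and temps[core] > threshold and idle:
            coolest = min(idle, key=lambda c: temps[c])
            new[coolest] = new[core]
            new[core] = None  # vacated hot core is left idle to cool down
            idle.remove(coolest)
    return new

# hypothetical 4-core snapshot: core 0 runs hot with thread T0
temps = {0: 95.0, 1: 60.0, 2: 70.0, 3: 55.0}
assignment = {0: "T0", 1: "T1", 2: None, 3: None}
cooled = migrate_threads(temps, assignment, threshold=85.0)
```

Real controllers must additionally account for migration latency, cache warm-up costs, and lateral heat conduction between neighbouring cores, which this sketch ignores.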
A software-hardware hybrid steering mechanism for clustered microarchitectures
Clustered microarchitectures provide a promising paradigm to solve or alleviate the problems of increasing microprocessor complexity and wire delays. High-performance out-of-order processors rely on hardware-only steering mechanisms to achieve balanced workload distribution among clusters. However, the additional steering logic results in a significant increase in complexity, which actually decreases the benefits of the clustered design. In this paper, we address this complexity issue and present a novel software-hardware hybrid steering mechanism for out-of-order processors. The proposed software-hardware cooperative scheme makes use of the concept of virtual clusters. Instructions are distributed to virtual clusters at compile time using static properties of the program such as data dependences. Then, at runtime, virtual clusters are mapped into physical clusters by considering workload information. Experiments using SPEC CPU2000 benchmarks show that our hybrid approach can achieve almost the same performance as a state-of-the-art hardware-only steering scheme, while requiring low hardware complexity. In addition, the proposed mechanism outperforms state-of-the-art software-only steering mechanisms by 5% and 10% on average for 2-cluster and 4-cluster machines, respectively.
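The runtime mapping of virtual clusters onto physical clusters can be pictured as a load-balancing problem. The sketch below (an illustration of the concept, not the paper's hardware logic) greedily assigns virtual clusters, heaviest first, to the least-loaded physical cluster:

```python
def map_virtual_to_physical(vcluster_load, n_physical):
    """Illustrative runtime mapping: assign virtual clusters, heaviest
    first, to the physical cluster with the least accumulated load
    (classic LPT greedy scheduling)."""
    phys_load = [0] * n_physical
    mapping = {}
    for vc, load in sorted(vcluster_load.items(), key=lambda kv: -kv[1]):
        target = min(range(n_physical), key=lambda p: phys_load[p])
        mapping[vc] = target
        phys_load[target] += load
    return mapping, phys_load

# hypothetical profiled workload per virtual cluster
loads = {"v0": 40, "v1": 30, "v2": 20, "v3": 10}
mapping, phys = map_virtual_to_physical(loads, 2)
```

The appeal of the hybrid scheme is that the expensive dependence analysis happens once at compile time, leaving only this cheap mapping decision to the runtime/hardware.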
Enhanced dendrite nucleation and Li-clustering at vacancies on graphene
An ever-present challenge for Li-ion batteries is the formation of metallic
dendrites on cycling that dramatically reduces cycle life and leads to the
untimely failure of the cell. In this work we investigate the modes of
Li-cluster formation on pristine and defective graphene. Firstly, we
demonstrate that on a defect free surface the cluster formation is impeded by
the thermodynamic instability of Li₂ and Li₃ clusters. In contrast,
the presence of a vacancy dramatically favours clustering. This provides
insights into the two modes of Li-growth observed: on the pristine basal
plane, if the Li-Li repulsion of the small clusters can be overcome, then
plating-type behaviour would be predicted (rate- and voltage-dependent, and at
any point on the surface); whilst dendritic growth would be predicted to
nucleate from vacancy sites, either pre-existing in the material or formed as
a result of processing.
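The cluster (in)stability discussed above is commonly quantified by a per-atom formation energy of the supported cluster relative to isolated adatoms; the expression below is the standard construction, not necessarily the exact definition used in the paper:

```latex
E_f(n) \;=\; \frac{E(\mathrm{Li}_n@\mathrm{graphene}) \;-\; E(\mathrm{graphene}) \;-\; n\,E_{\mathrm{Li}}}{n}
```

Under this convention, a positive E_f(n) for n = 2, 3 on the pristine sheet signals the instability that impedes clustering, while a negative value at a vacancy site signals favourable dendrite nucleation.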
A Longitudinal Study of Identifying and Paying Down Architectural Debt
Architectural debt is a form of technical debt that derives from the gap
between the architectural design of the system as it "should be" compared to
"as it is". We measured architecture debt in two ways: 1) in terms of
system-wide coupling measures, and 2) in terms of the number and severity of
architectural flaws. In recent work it was shown that the amount of
architectural debt has a huge impact on software maintainability and evolution.
Consequently, detecting and reducing the debt is expected to make software more
amenable to change. This paper reports on a longitudinal study of a healthcare
communications product created by Brightsquid Secure Communications Corp. This
start-up company is facing the typical trade-off problem of desiring
responsiveness to change requests, but wanting to avoid the ever-increasing
effort that the accumulation of quick-and-dirty changes eventually incurs. In
the first stage of the study, we analyzed the status of the "before" system,
which indicated the impacts of change requests. This initial study motivated a
more in-depth analysis of architectural debt. The results of this analysis were
used to motivate a comprehensive refactoring of the software system. The third
phase of the study was a follow-on architectural debt analysis which quantified
the improvements made. Using this quantitative evidence, augmented by
qualitative evidence gathered from in-depth interviews with Brightsquid's
architects, we present lessons learned about the costs and benefits of paying
down architecture debt in practice.
Comment: Submitted to ICSE-SEIP 201
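One widely used system-wide coupling measure of the kind mentioned above is propagation cost: the density of the transitive closure of the module dependency matrix. The sketch below is a hedged stand-in, since the abstract does not name the paper's exact metrics:

```python
import numpy as np

def propagation_cost(dep):
    """Propagation cost: fraction of module pairs connected by a direct
    or indirect dependency, counting each module as reaching itself."""
    n = len(dep)
    reach = np.array(dep, dtype=bool) | np.eye(n, dtype=bool)
    # Warshall's algorithm: transitive closure of the dependency relation
    for k in range(n):
        reach |= np.outer(reach[:, k], reach[k, :])
    return reach.sum() / (n * n)

# chain A -> B -> C: the closure contains 6 of the 9 possible pairs
dep = [[0, 1, 0],
       [0, 0, 1],
       [0, 0, 0]]
pc = propagation_cost(dep)
```

Tracking such a metric before and after a refactoring gives the kind of quantitative before/after evidence the longitudinal study relies on.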
Profile-guided redundancy elimination
Program optimisations analyse and transform programs so that
better performance can be achieved. Classical optimisations
mainly use the static properties of the programs to analyse program
code and make sure that the optimisations work for every possible
combination of the program and the input data. This approach
is conservative in those cases when the programs show the same runtime
behaviours for most of their execution time. On the other hand,
profile-guided optimisations use runtime profiling information to discover
the aforementioned common behaviours of the programs and explore
more optimisation opportunities, which are missed in the classical,
non-profile-guided optimisations. Redundancy elimination is one of the
most powerful optimisations in compilers. In this thesis, a new partial
redundancy elimination (PRE) algorithm and a partial dead code elimination
(PDE) algorithm are proposed for a profile-guided redundancy
elimination framework. During the design and implementation of the
algorithms, we address three critical issues: optimality, feasibility and
profitability.
First, we prove that both our speculative PRE algorithm and our
region-based PDE algorithm are optimal for given edge profiling information.
The total number of dynamic occurrences of redundant expressions
or dead code cannot be further reduced by any other code
motion. Moreover, our speculative PRE algorithm is lifetime-optimal,
which means that the lifetimes of the newly introduced temporary variables
are minimised.
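The profitability intuition behind speculative PRE with edge profiles can be sketched as follows (a toy cost comparison, not the thesis's optimal algorithm): speculatively hoisting an expression pays off when the profiled frequency of the insertion edges is lower than the total frequency of the evaluations made redundant.

```python
def speculative_hoist_profit(eval_freqs, insert_freqs):
    """Toy profitability test for speculative PRE: compare the dynamic
    evaluations removed against the speculative evaluations introduced,
    both measured from edge-profile frequencies."""
    saved = sum(eval_freqs)    # dynamic evaluations made redundant
    added = sum(insert_freqs)  # speculative evaluations introduced
    return added < saved, saved - added

# expression evaluated on two hot edges (900 + 800 executions); hoisting
# inserts one evaluation on an edge executed 1000 times
ok, gain = speculative_hoist_profit([900, 800], [1000])
```

An optimal algorithm, of course, must make these decisions jointly over the whole control-flow graph rather than edge by edge, which is exactly what the edge-profile-guided formulation in the thesis addresses.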
Second, we show that both algorithms are practical and can be efficiently
implemented in production compilers. For SPEC CPU2000
benchmarks, the average compilation overhead for our PRE algorithm
is 3%, and the average overhead for our PDE algorithm is less than 2%.
Moreover, edge profiling rather than expensive path profiling is sufficient
to guarantee the optimality of the algorithms.
Finally, we demonstrate that the proposed profile-guided redundancy
elimination techniques can provide speedups on real machines by conducting
a thorough performance evaluation. To the best of our knowledge,
this is the first performance evaluation of the profile-guided redundancy
elimination techniques on real machines.